The Traditional Future: A Computational Theory of Library Research

Author

  • Andrew Abbott
Abstract

I argue that library research should be conceived as a particular kind of research system, in contrast to more familiar systems like standard social scientific research (SSSR). Unlike SSSR, library research is based on nonelicited sources, recursively used and multiply ordered. It employs the associative algorithms of reading and browsing as opposed to the measurement algorithms of SSSR. Unlike SSSR it is non-standardized, non-sequential, and artisanally organized, deriving crucial power from multitasking. Taken together, these facts imply that as a larger structure library research has a neural net architecture as opposed to the von Neumann architecture of SSSR. This architecture is probably optimal given library research's chief aim, which is less finding truth than filling a space of possible interpretations. From these various considerations it follows that faster is not necessarily better in library research, with obvious implications for library technologization. Other implications of this computational theory of library research are also explored.

Among the most important questions raised by the current revolution in libraries is that of the effect of the various new library technologies on library scholarship as a whole. Surprisingly, there is no serious theoretical reflection on this topic. Most writers focus their attention only on the new techniques themselves: the research tasks that are newly possible or that can now be accomplished faster than ever before. No one asks whether there are sound theoretical reasons for thinking that faster is better or that the newly possible work will lead to improvement in library-based scholarship as a whole. Indeed, library-based scholarship as an overall enterprise has seen relatively little study. There is serious empirical study of various search strategies. There is a good deal of writing about digital library research and about teaching various populations of non-scholars how to do library research.
There are occasional articles studying the research habits of individual scholars in the library-based disciplines. But there is nothing, in the library literature at least, about how library scholarship works as a corporate enterprise, much less about the possible overall effects of the revolution in academic libraries on that enterprise. (FN 1) Nor is there much written by academics in the fields most affected by that revolution. There are library research how-to manuals for graduate students in scattered fields, a fact that indicates that humanists as well as some social scientists do sometimes teach library research to their graduate students, introducing them to the critical reading of sources and to the major bibliographical and archival guides for their fields. But seasoned library researchers do not seem to write much about library research methods in general. There are no books on cutting-edge library methodology, no equivalents to journals like Sociological Methods and Research or The Journal of Economic Methodology where quantitative social scientists present their latest techniques. Of the 45 articles in JSTOR's 71-journal history collection whose abstracts contain the word "library" or "libraries," none is about the practice of library research. Lest it be thought that "library" is too general a term, there are only eight articles in the JSTOR history collection with the word "bibliography" in their abstracts, and only nine with both the words "reading" and "sources." None of these articles is explicitly about the creation of a bibliography or the reading of sources. In part, this lack of attention may reflect the belief of historians, musicologists, literature professors, and the other library-based scholars that the more global parts of library "methodology" (how to assemble sources, how to maintain records and files, how to assess which areas of a project need further library work) are not really "library research" proper.
These other things are taught in seminars and in direct supervision of dissertations, and perhaps don't seem to belong to the library per se. But given the importance of libraries to these disciplines, it is still striking that there is nowhere in them a body of theoretical or even empirical speculation about the nature of library-based scholarship as a general social form. As for the sociologists, whose business it is to study such social forms, they too have said little. The sociologists of science have been almost completely preoccupied with the natural sciences and their laboratories, ignoring even the social sciences, much less the humanities. Looking at sociology more broadly, the 56 articles in JSTOR's sociology section that have the words "library" or "libraries" in their abstracts include none that is about what we might call the sociology of advanced library research. There is simply no sociological writing on the topic. (FN 2) This extraordinary disattention to the theory and practice of library research is all the more surprising given that there are quite a few theoretical reasons for expecting the present revolution in libraries to have very powerful effects on the scholarship accomplished in and through libraries. For one thing, electronic consortia like JSTOR have brought to nonelite universities vast holdings that used to be the privilege of the elite, a development that could raise or lower the average level of scholarship depending on our assumptions about the impact of an individual researcher's quality on his output. For another, the vast increase of easily identifiable and retrievable material has swelled reference and citation lists, possibly making it much harder to reach consensus in subfields.
For yet another, the huge decline in the cost of accessing materials has probably meant on a simple two-factor production model that today's scholars spend more time accessing scholarship and less time reading it than did their predecessors, a change that could easily lead to declines in overall scholarly quality. One could develop many such arguments. Evaluating these hypotheses, however, is a difficult matter. First of all, we lack an agreed-upon outcome variable. What exactly do we mean by good scholarship overall? Most measures of scholarly productivity at the individual level boil down to bean-counting, either of publication or of citations, and no one with in-depth knowledge of any substantive field thinks that either of these measures has much concept validity. But even if we were to have a valid outcome variable, we don't really have a theory of how advanced library research actually works. Yet such a theory is required if we are to make predictions about how changes in library technologies might actually affect scholarship overall. We do, to be sure, have some ideas about what scholars do in libraries as individual users. But we don't have a theory of how those activities are tied together to make a successful scholarly community. Most of the models for such processes, again, concern the natural sciences, where the Popperian, Kuhnian, and other models are familiar. In short, there is no truly formal or theoretical consideration of library research as an enterprise, and, consequently, no sound basis on which to form a view of whether the current transformation of libraries is good or bad for scholarship. In this paper, I will undertake the first task in order to draw some conclusions about the second. I begin with a brief sketch of standard social scientific methods. 
By first discussing a reasonably well-known and well-thought-through system of research, I hope to establish what the parts of a research system are and what parameters determine its functioning. With that framework in hand, I then turn to library research, which I define largely through its contrasts with this other, more familiar body of knowledge procedures, showing how it differs in sources, practices, structures, and aims. This discussion culminates in the argument that the two sets of research practices represent different forms of computation. By pursuing this metaphor, I move the discussion onto neutral grounds in order to escape the usual polemics about libraries. The paper closes by drawing out the implications of the computational theory of library research for the future of both library research and library policy. A word of definition and clarification is useful before beginning. By the phrase "library research," I do not refer to all usage of the library, but only to advanced scholarly usage. Undergraduates may be the most common users of the library because of their huge numbers, but they do not need the immense holdings characteristic of scholarly libraries. And within "advanced scholarly usage," I am referring only to those branches of scholarship whose principal mode of production has been the use of library materials. I am thus talking for the most part about the humanities and the humanistic social sciences: scholars of the various languages and literatures, historians, musicologists, art historians, philosophers, and members of those branches of sociology, anthropology, and political science that draw heavily on library data (historical sociology, for example). Of course scientists use libraries. But their main mode of production is not library research. I am here interested only in those branches of scholarship that rely heavily on libraries for their "data" itself.
Because there is so little prior work, there is no way to avoid confusing the empirical and the normative in what follows. In part, this is a confusion inevitable in any writing about methods. We would not describe standard social scientific methods purely in terms of what social scientists do in practice, but rather in terms of what they ought to do in theory. At the same time, those of us who teach those methods know that in practice we have to teach our students not only precepts about what a good methodologist ought to do in the abstract, but also empirical rules of thumb that can guide their everyday practice. For library research, we lack the abstract precepts, making do at best with the empirical rules of thumb, and in most cases lacking even those: How big a bibliography is big enough, for example? Indeed, one way of understanding what I am doing is to say that I am trying to provide the prescriptive theory, the "ought" theory, of library research by trying to theorize what library research actually "does," i.e., what it ought to do when it is empirically being a best version of itself. This may be confusing at times, but it is an inevitable concomitant of the early stages of inquiry.

I. Standard Social Science Methods

Let me begin by sketching a better-theorized body of research method, one that can serve as a foil against which to develop my concept of library research. I shall use standard social scientific methods for this purpose. Of course the picture I draw here will be stark and unnuanced. But that is another price of thinking theoretically, at least at the outset. By standard research or standard methods I mean here methods as understood within the broad range of the quantitative social sciences. I will cover the basics of these research methods under three headings: Sources, Practices, and Structures. To begin with sources. Standard social science elicits its data. This elicitation can be by surveys or by interviews.
It is most often active elicitation, although much social science is built on data that is either collected on a routine basis (like census data) or simply passively piled up as a part of record-keeping for commercial or other purposes. Data that is actively elicited is standardized and formalized in various ways: it can be selected according to the rules of sampling, for example, and it can be precoded via forced-choice instruments. After "cleaning," it can be used directly or further aggregated via data reduction techniques like factor analysis and clustering. These gathered and prepared sources are then subject to the various practices of research. The data are first translated in terms of a set of concepts and measures, which have usually, indeed, governed much of the process of elicitation. These concepts and measures are typically widely shared across a literature, like the notion of stress in studies of social support or like the use of years-in-school as a variable to indicate education. Substantial subliteratures form around the task of improving these concepts and measures, an improvement that may mean better stability over time, or better portability across datasets, or greater plausibility in terms of theory. Once couched in terms of shared concepts and indicators, the translated source data, which has now been redacted into variables, becomes subject to various methodologies. The majority of these methodologies in social science have the aim of expressing some one (dependent) variable as a function of the rest (independent variables), typically as a linear combination of them. The choice of methodology is to some extent determined by the nature of the dependent variable, although, conversely, that variable can usually be transformed to fit a preferred methodology: dichotomized, categorized, logit-transformed, and so on.
These methodologies are for the most part completely routinized recipes for analysis; one "writes a model," puts the data in, and results come out. But all the same there is plenty of room for modifying these recipes through the handling of the various challenges that data always present to the stringent assumptions of the statistical techniques. Seemingly mechanical in theory, these methodologies nonetheless require a subtle and artful hand in practice. The underlying logic of all of these practices, loosely but nonetheless strongly held by most people working in standard social science research, is a modified version of the Popperian model of conjectures and refutations (1962). The scholarly intervention is regarded as making a plausible conjecture about the way the world is and then evaluating it against data. If the conjecture is not rejected, then because of its theoretical plausibility it can be added to and possibly reconciled with our stock of conjectures to this point. In a loose sense, that is, the basis of standard social scientific methods is about a correspondence between our model of the world (that a group of independent variables determine a dependent one in a certain way) and the way the numbers fall out in practice. Adding a conjecture to our stock of conjectures is often a simple matter: by adducing a new conjecture an article may set limits on the range of a causal relationship or conversely show its viability in new realms. The question of reconciliation is a more vexed one, however. New results are often inconclusive, and new methodologies can produce results not so much contradictory to earlier ones as incommensurable with them. The problem of reconciliation thus raises the question of how our standard research practices are embedded within a larger structure of research. How, that is, are researchers and their individual research chained together into a larger enterprise?
The first larger structure of standard research is the enormous corpus of data, both formally elicited and passively collected, that the social sciences have used over the years. Much of this is collected in places like the census, the Inter-University Consortium for Political and Social Research (ICPSR), and the National Opinion Research Center (NORC) that maintain data archives. Other data remains in researchers' papers. Sometimes data is published in one form or another, print or on-line. What is important for our present purposes is that in the main this corpus of data is not really systematized and ordered; there is no quantitative equivalent to the historians' National Union Catalogue of Manuscript Collections, for example. The main characteristic of the larger data structure of social science is this unordered, unsystematized quality. It is just a vast pile of used datasets. (FN 3) A second structural quality of the standard research world is specialization and division of labor. Division of labor can obtain at the level of the project; there can be interviewers and coders and analysts and PIs. But it can also obtain at the level of the discipline. There are specialists in sampling and in particular quantitative methodologies as there are specialists in this or that research area. This is an obvious fact and requires no further comment. A third structural quality of standard research is that it is to a considerable extent characterized by a sequential logic. Things have to happen in a certain order. You gather data before you analyze it. You validate your measures before you apply them. You select data with a certain question in mind. Beyond the level of the project, this sequentiality continues. Broad propositions tend to be succeeded by more specific and limited ones. Subparts of large questions must be resolved before attacks on the large questions can produce credible results.
To be sure, quantitative social science as a general enterprise is typically advancing on many fronts at once. But within particular research traditions, a sequential logic usually applies, as we see from the common belief in cumulativity. In any specific literature of standard research, early pieces are felt to be less specified, less methodologically careful, less definite. Later results are more specific, more rigorous, more defined. Within such traditions, indeed, later studies often self-consciously replicate earlier studies, even while extending or specifying them. The final quality of standard research taken as a structure is its organization around a search for truth. As I noted earlier, "truth" here means in practice a correspondence between the way we predict the numbers to be, given our theoretical ideas, and the way the numbers actually are when we have gone out and measured the world. This means that standard research is ultimately a form of prediction and search. The truth is thought to be out there in the real world (a, b, and c cause x), and our model is a hypothesis about what that truth is (maybe we think b and c cause x). We measure reality according to our model, and then reality tells us whether we found the truth or not (in this case, that we are a little off in our guess about where truth is.) Standard methods are thus ultimately a formalized version of blindman's bluff; we make educated guesses about where the truth is and then get told whether our guesses are right or wrong. Fundamental to this game is our belief that the truth is somewhere out there in the world to be discovered. There is a "true state of affairs." Our inability to find it may be a problem, but the true state of affairs exists and can in principle be found. One can disagree with various parts of this picture and certainly one could make it much more precise. 
But overall it is an acceptable thumbnail sketch of how standard research operates in practice in the social sciences. Let me summarize it quickly. The sources of standard research works lie most often in actively elicited data, which is often standardized or concatenated in the process of being collected. The practices of standard research begin with the application of measures and terminologies that are standardized, widely shared (or at least in principle sharable), and usually fairly rigid and specified. They then continue with the application of routine methodological recipes that evaluate the conjectures of researchers by comparing them to the state of the real world. The recipes either accept or reject the conjectures. The larger structures of this standard research world comprise first the enormous collection of used data, which is not particularly systematized or ordered. They comprise second the qualities of sequentiality and division of labor. And they comprise third an overall organization of research around the search for a true state of affairs, which is taken to be "out there" in the real world, but possibly very difficult to find.

II. Library Research

Let me now turn to a similar analysis of library research. As I noted earlier, this is a much less organized and defined system of materials and practices. But we can characterize library research by looking again at sources, practices, and structures, using the sketch just given of standard methods as a guide to the analysis. If as a result library research seems a little too perfectly opposed to what I have called standard research, we can regard that as a heightening of differences for ease of comprehension, not as a claim that some awful chasm divides the two. In fact, they interpenetrate considerably.
(FN 4) Recall, finally, from the introduction that the phrase "library research" means use of library materials by expert scholars and in particular by scholars in those disciplines for which use of library materials is the primary mode of intellectual production: historians, professors of literature, and so on. The differences start at the beginning, with sources. Library research uses not elicited data, but recorded data: things in libraries. Some of this is passive records of the kind we have earlier seen: routine census data or annual reports of companies, governments, and other organizations. But much of it is author-produced primary material of various types: novels, autobiographies, religious tracts, philosophical discourses, films, travelogues, ethnographic reports, and so on. What is important about all of this primary material is that it was not elicited by the researcher. It is simply there, created by its authors or originators and deposited one way or another in the library. In this sense, the only analogous material in standard research is passively collected quantitative data. But this recorded primary material is only part of the data for library research. An immense portion of the sources of library research consists of prior library research (and indeed prior non-library research as well). Moreover, library research uses this prior work in a very different way than does standard research. In standard research, previous work is of interest largely for its output: the conjectures that it authorized or rejected. In library research, prior research is used for all sorts of things in addition to its output. Indeed, it is often ground up into pieces: its primary data can be redefined and reused, its interpretations can be stolen and metamorphosed, its priorities deformed and redirected, its argument ransacked for irrelevancies that are changed into major new positions.
Although it is by custom called "secondary material," the prior work recorded in the library is to all intents and purposes yet another form of primary data. We can label this peculiar and intensive use of prior research with a word from computer science. Library research, we can say, is recursive; it can operate on itself. So the sources of library research are quite different from those of standard research: they are not elicited by researchers and they are, in the sense just defined, to a considerable extent recursive. Moreover, the vast corpus of stuff that makes up the data of library researchers is ordered in a number of important ways. It is classified not only by its author and publisher and date and other facts of provenance, but above all by its subject headings and, in particular, by the most important of these, the call number that gives it a physical location. Unlike the data of the standard researchers, the data of the library researcher is embodied in physical artifacts, a fact I shall return to below. (This is of course changing at the moment, but we are considering the system as it has evolved to the present.) But subject headings are not the only forms of classification and ordering in the library. To subject headings are added back-of-the-book indexes and bibliographic notes, subject bibliographies, encyclopedias and handbooks and other reference works, bibliographical guides, and so on. Most of this indexing and assembling is done by human minds, not by the concordance indexing that drives most of our current search engines. (FN 5) Indeed, an enormous amount of this indexing is implicit in the contents of the data artifacts themselves: one way of understanding any given book based on library research is as a kind of index to a particular set of other library materials. In short, library research materials have an order imposed on them quite different from the order present in the elicited data of standard methods.
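The recursivity just described can be sketched in a few lines of code. The sketch below is purely illustrative (the corpus contents and the "write" function are invented for the example): each finished work reenters the corpus and becomes raw material for the next generation of scholarship, exactly the sense in which library research "operates on itself."

```python
# Toy sketch of recursivity: finished scholarship reenters the
# corpus as primary material for later scholarship.
# (Corpus contents and the write_monograph function are hypothetical.)

corpus = ["census tables", "Grey correspondence"]  # primary texts

def write_monograph(sources):
    """Grind prior materials, prior research included, into a new work."""
    return "study of (" + "; ".join(sources) + ")"

# First-generation scholarship draws only on primary texts...
first = write_monograph(corpus)
corpus.append(first)

# ...but later scholarship operates on prior scholarship as well:
second = write_monograph(corpus)
assert first in corpus           # prior research is now itself data
assert second.count("study of") == 2  # the new work contains the old one
```

Nothing analogous happens in the standard-research picture sketched earlier, where a used dataset simply goes onto the unordered pile rather than becoming input to the next elicitation.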
In elicited data, the analyst imposes order on what he perceives to be mere human activity by applying certain accepted conventions of measurement and conceptualization. And once a dataset is gathered and used, it goes on a stack of datasets that is not further ordered, classified, or indexed. (There are a few counterexamples, but they prove the rule.) But no one would take the materials in a library as uncognized activity needing to be ordered by certain conventions of measurement and coding. Library materials are already cognized and ordered in dozens of ways. Each book is a particular selection of things by a human agent, and beyond the books themselves, indexers have created dozens of mappings, by no means all the same, of myriad idiosyncratic subsets of the materials in the library. The sources for library research are in short fundamentally different from those of standard research, above all because of this huge amount of indexing and preorganization, which far surpasses the straightforward application of measurement and coding conventions that is characteristic of standard research. Let me underline that I am not speaking of one, single comprehensive order. There are multiple such orders, and deliberately so, an obvious contrast with the strain of standard research towards consistent definitions. In summary, the sources of library research consist of recorded materials, which include prior library research (which thus can be used recursively), and which are ordered by a large number of multiple and crosscutting indexes that govern myriads of subsets of their contents. It helps to have a simple term to refer to library materials: I shall call them "texts." With these texts, library researchers undertake quite different practices than do their standard researcher colleagues. In the first place, library researchers to a great extent lack the well-defined and widely shared concepts and measures that are fundamental to the practices of the standard researchers.
The only strictly defined terms in library work are those of certain established large-scale indexes, so-called controlled vocabularies; at the monograph and reference book level, controlled vocabularies are created de novo for each new artifact. And the steady shift of terminologies as language drifts inevitably over time, particularly with respect to more complex concepts, limits the efficacy of the major controlled vocabularies. Some library fields have highly specific and enduring terminologies, to be sure: musicology and historical linguistics are examples. But the vast majority of library research does not involve use of widely shared, well-defined, and stable concepts, nor any other idea of "measure" analogous to that in standard methods. The chief practices of library scholars with texts are reading and browsing. It is these that are in fact the analogue of the standard researchers' measurement, since it is by reading and browsing that library research scholars extract what they want from texts. By pointing to reading and browsing as methodologies, I want to make them unfamiliar, less taken for granted. We need to see the exact analogy between a standard researcher who "measures" the social world using a fairly limited vocabulary of shared concepts and indicators, and a library researcher who browses or reads a text using his or her own and possibly idiosyncratic interpretive armamentarium. In order to understand reading and browsing as the analogues of the measurement and methodology of standard research, it is useful to borrow language from computer science. Measurement, in computer science terms, employs a fairly simple algorithm. A measurement algorithm takes social reality as input and returns a number or category. The shared or at least in-principle sharable nature of the algorithm means that its output is independent of who runs it. Browsing and reading constitute this kind of "measurement" only in a very limited sense.
To the extent that we think of a text as having a single fixed meaning, invariant with respect to any differences in the readers, reading the text should return that meaning. In such a case we could think of reading as pure measurement. But texts that have such fixed meaning almost never occur in natural language; they can exist only in things like computer programming that have perfectly controlled vocabulary and syntax. Most texts have multiple and ambiguous meanings, and no texts outside controlled languages have meanings that are invariant with respect to readers. Reading and browsing (the two are simply different levels of the same thing) thus belong to a different family of algorithms than measurement. They are association algorithms, in which input is taken from text and combined with internal data to produce an output. They are thus inherently nonreplicable because of their dependence on data internal to the reader or browser. A useful way of imagining this is to think about the book-reader technology as compared with the site-surfer technology. In the site-surfer technology, hyperlinks are hard-coded into the page and direct every reader to specific preconnected pages. In the book-reader technology, hyperlinks are generated dynamically in the act of reading. They arise by the conjunction of knowledge in the mind of the reader with meanings in the body of the text. Such a system is obviously intensely dependent on the richness of prior knowledge in the minds of readers. And although we can, through things like general examinations, force a certain level of basic background knowledge into the minds of young scholar readers, there will still remain quite large random differences in this background knowledge even between fairly closely comparable scholars. And consequently there will be substantial variation in the outputs of the reading process even between two such scholars.
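The contrast between the two algorithm families can be made concrete as two function signatures. In this hypothetical sketch (the particular functions and word-counting "measure" are invented for illustration), measurement is a pure function of its input, so every researcher running it gets the same output, while reading combines the text with state internal to the reader, so two readers diverge on the same text:

```python
# Illustrative contrast: measurement algorithm vs. association algorithm.

def measure(social_reality: str) -> int:
    """Measurement: output depends only on the input, so the
    result is independent of who runs the algorithm."""
    return len(social_reality.split())  # e.g., a simple shared count

class Reader:
    """Association: output combines the text with data internal
    to the reader, so two readers diverge on the same text."""
    def __init__(self, prior_knowledge: set):
        self.prior_knowledge = prior_knowledge

    def read(self, text: str) -> set:
        # "Hyperlinks" arise dynamically wherever the text meets
        # the reader's prior knowledge.
        return {w for w in text.split() if w in self.prior_knowledge}

text = "reform bill parliament correspondence diaries"
a = Reader({"parliament", "reform"})
b = Reader({"diaries", "correspondence"})
assert measure(text) == measure(text)   # replicable
assert a.read(text) != b.read(text)     # reader-dependent
```

The point of the sketch is only the dependence structure: `measure` is stateless and thus replicable; `read` carries hidden state (the reader's mind) and thus is not.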
Reading is thus profoundly different from measurement as a research practice, since the latter has replicability as one of its most important qualities. One can "read" with differing levels of attention to detail. Skimming is what we call reading when we pay very little attention to detail. Browsing is what we call reading when we disregard not so much the details of a text as its composed order. Browsing is an association algorithm that ignores the continuous order of the text or, more commonly, that is applied to things that are not continuous composed texts in the first place but that have other kinds of order built into them. One can browse a continuous text by flipping through it here and there, but one more often browses things that have an order that is not through-composition. One browses an index or a bibliography, which is ordered alphabetically by main topic and/or author. Or one browses a handbook or other reference work, which is ordered by main topics in some structural or functional relation to one another. Or one browses a shelf, which is ordered by call number. In each case, that is, browsing brings together a prepared mind and a highly ordered source that is (usually) not a continuous text. To some extent, browsing is analogous to what are called hashing algorithms in searching systems; it takes large blocks of material and disregards or inspects them on the basis of simple data checks. At other times, browsing operates via simple association of random elements in the object browsed with random elements in the reader's mind. Each random connection is associated with a probability that it will be useful, and those above a certain level are retained. What is central to all forms of browsing is thus the coming together of a highly organized but not necessarily continuous source object with an equally highly (but quite differently) organized mind. From this is expected to emerge a substantial collection of productive but random combinations.
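The two-stage character of browsing, a cheap hash-like check that disregards whole blocks, followed by probabilistic retention of chance conjunctions, can be sketched as follows. Everything here is a toy model: the shelf contents, the overlap test, and the usefulness threshold are all invented for illustration, and the random score stands in for the idiosyncratic judgments of a particular mind.

```python
import random

def browse(source_items, mind, threshold=0.5, seed=42):
    """Toy model of browsing (all names and values hypothetical).

    Stage 1: a quick, hash-like check disregards blocks that share
    nothing with the browser's prior knowledge.
    Stage 2: each surviving chance conjunction gets a usefulness
    score, and only those above a threshold are retained."""
    rng = random.Random(seed)
    retained = []
    for item in source_items:            # e.g., titles on a shelf
        overlap = mind & set(item.split())
        if not overlap:
            continue                     # disregard the whole block
        usefulness = rng.random()        # random conjunction value
        if usefulness >= threshold:
            retained.append(item)
    return retained

shelf = ["reform bill pamphlets", "tory diaries", "railway timetables"]
mind = {"reform", "diaries"}
kept = browse(shelf, mind)
# Only items that connect to the browser's mind can survive; which
# of those actually survive depends on the (random) conjunctions.
assert set(kept) <= {"reform bill pamphlets", "tory diaries"}
```

Note how the output varies with `mind`: a browser with empty prior knowledge retains nothing, which is the coding of the claim above that browsing requires a prepared mind meeting an ordered source.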
As I have noted, the role of internal knowledge in reading and browsing implies a crucial difference from the measurement that is their equivalent in standard methodology; they are not replicable. Two readers don't get quite the same output from reading a book, and there is no real attempt in library research fields to correct this by improving measures, controlling terminologies, and so on. There is thus no real equivalence between an English professor presenting a reading of a novel to a class and a sociology professor discussing quantitative indicators of education. The second is interested in and hopes to produce replicability. The first regards replicability as both unachievable and undesirable. Another, equally important difference between the "methods" of library research and those of standard research is that the former lack sequentiality. Even at the single-text level, library researchers read straight through only rarely. While some library researchers read background sources straight through at the beginning of a project, it is much more common for a project to begin out of a variety of types of sources of varying levels of detail and relevance, which have been read in no particular order. There is no equivalent in a library research project to the Idea-Question-Data-Method-Result sequence of the standard research program. To be sure, even the latter is in practice something of a rationalization after the fact, but in library research there is no attempt to create even such an imposed, retrospective order. For example, there is no right order in which to read the original sources for a book on, say, the passing of the Reform Bill of 1832, although of course any library researcher would catch the major secondary source, J. R. M. Butler's (1914) magisterial book, on the first bibliographical pass. Should you read the Parliamentary debates first? Or the private correspondence of Earl Grey? Or the diaries of the important Tory magnates?
It is possible that three quite different but equally important works could be written on the subject starting from those three different beginnings. The sequence does not matter. The rule of thumb in library research is usually to read most heavily at any given time in the area of the largest hole remaining in the argument. The result of that rule is that sources are read in wildly different orders in comparable projects. But the lack of standardization and sequentiality does not exhaust the differences between library and standard research practices. Library research is also different in that it is customarily artisanal. Each project is done by a single scholar. This obviously goes hand in hand with the lack of standardization. The unity that a project has is the unity of its researcher, since his is the mind that reads and interprets, his is the mind that browses, his is the mind that ultimately puts it all together. Those of us who do this kind of work have all tried to use research assistants. And nearly all of us have given up on them except for those very narrow portions of projects where we can make use of fixed terminologies. No research assistant I have ever hired to compile a bibliography has come up with one half as good as the ones I can make for myself. They simply don't have the same contents in their minds and hence can't perform to my satisfaction the simple associative task that is creating a bibliography. The downside of artisanality is familiar enough; it slows production. Historically, this has been one of the crucial forces driving the social sciences toward the research practices that I have here called standard research. But artisanality has an extremely important upside, which is that it permits an extremely productive form of multitasking. It is best to show this with an example. In a recent library-based project I chose to do my own coding of the lives of every occupational therapist (OT) working in Illinois in 1956.
And because the relevant source did not permit immediate extraction of exactly and only what I wanted, I was forced to scan large amounts of interesting but slightly irrelevant material: lives of occupational therapists in other states, aspects of career data that I wasn't coding, addresses, and so on. In the eight hours that I spent on the task, I let my browsing self run in the background like a virus checker. What it picked up (that is, what I acquired in addition to the coded careers that I wanted) were signs of crucial changes in the population of organizations employing OTs, indications of a separate military career trajectory for OTs, two possible hypotheses about the intersection of social class and occupational therapy, and a firm grasp on the marital demography of occupational therapists. Even if my research assistant hadn't wasted his own multitasking capabilities by listening to music (as he usually would, I am convinced), he doesn't know that Easter Seals, one of the common employers of OTs, was a polio relief organization in the 1950s and that polio virtually disappeared as an American problem in that decade, two facts that taken together show that one of occupational therapy's crucial work jurisdictions was under threat. He doesn't know that there is a mental hospital in Anna, Illinois, which in a complicated way was the key to my marital pattern insight. I saw those things only because I had the requisite knowledge, left over from past projects and, in the polio case, from simply having lived through the period involved. It should at once be noted that another seasoned researcher might have seen other things and not these. But as we will see below, that doesn't in fact matter. What does matter is that because a single prepared mind does all the work in the typical library project, the prospects for productive multitasking are very, very high. This is all foregone in the standard research project with its often considerable division of labor.
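The "largest hole" rule of thumb mentioned earlier can itself be sketched as a greedy loop. The sketch below is purely illustrative: the function name, the gap sizes, and the fixed reading payoff are all invented. What it shows is why comparable projects read the same sources in wildly different orders: the order of reading is entirely determined by which hole happens to be largest at each moment, so slightly different starting states produce quite different sequences.

```python
# Illustrative greedy sketch of the "largest hole" rule of thumb:
# at each step, read in whatever area of the argument has the biggest
# remaining gap. Gap sizes and payoff are hypothetical numbers.

def reading_order(gaps, payoff=3):
    """Return the order in which areas get read until no gaps remain."""
    gaps = dict(gaps)
    order = []
    while any(g > 0 for g in gaps.values()):
        area = max(gaps, key=gaps.get)      # the biggest remaining hole
        order.append(area)
        gaps[area] = max(0, gaps[area] - payoff)
    return order

# Two comparable projects in slightly different starting states read
# the same three bodies of sources in different orders:
print(reading_order({"debates": 5, "correspondence": 4, "diaries": 2}))
print(reading_order({"debates": 2, "correspondence": 5, "diaries": 4}))
```

Nothing in the loop fixes a canonical sequence; sequentiality is an artifact of the current state of the argument, not a property of the sources.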
So far I have discussed the library research practices of reading and browsing, with their qualities of non-standardization, non-sequentiality, and artisanality. And I have emphasized the multitasking permitted by artisanality. Reading and browsing, I have argued, are the analogues of conceptualization and measurement in standard research, which are by contrast based on standardization and sequentiality, and which consequently permit, and indeed take advantage of, division of labor, both within projects and across projects. What then is the library analogue of methodology proper: of regression, or log-linear analysis, or event history methods, the various statistical techniques of the standard researchers? And what is the equivalent of the logical foundation of standard research practices on conjectures and refutations? The quick answer is that there is no such analogue. There is no family of fixed recipes by which library scholars produce their final output. We can at best give a general name to the process by which library researchers assemble their various materials into written texts. I shall give that process the label of "colligation," a term of William Whewell's. It denotes the inductive assemblage of a set of facts under a general conception of some kind. A classic example is Jacob Burckhardt's colligation of the various changes in the Italian city states under the heading of Renaissance. Whewell famously attempted a general theory of such induction, but it has had few followers and no successors. Indeed, much of nineteenth-century German historiography aspired to a quite different theory of historical writing. According to Ranke's celebrated dictum, history was a matter of search and discovery, a finding out of what had actually happened, "wie es eigentlich gewesen." This is exactly the model of standard research discussed

Publication date: 2007